How can businesses unlock the productivity gains of AI whilst mitigating the risk of ‘hallucinations’ and solving challenges around toxicity, privacy, bias, and data governance? asks Paul O’Sullivan, SVP Solution Engineering at Salesforce.

There is huge hype around generative AI apps like ChatGPT, with advocates arguing that this technology will increase productivity and improve efficiency for every type of company.

Yet behind the scenes, there is still a “trust gap” among leaders who understand the benefits of AI but are concerned about the risk it creates. 

Salesforce CEO Marc Benioff memorably warned that AI could be the “ultimate liar” due to the risk of “hallucinations” – a response using false or fabricated information. A Salesforce survey also found that 59% of respondents believe that the output of generative AI could be inaccurate due to these hallucinations. 

The same survey found that 67% are not prepared to implement generative AI within their technology estate due to security concerns. However, a further 83% would be willing to collaborate with other businesses to improve the ethical use of AI – showing that there is a willingness to address the trust gap and unlock the benefits of this game-changing technology. So how should companies do this? 

When Was Generative AI First Created?

To really understand generative AI and large language models (LLMs) such as ChatGPT, we need to look at their history. In the 1950s, the concept of AI was born when early pioneers argued that machines could be trained to a point at which they mimic human intelligence. Today, we’re at an inflection point because those predictions appear to be coming true: LLMs produced by OpenAI, Cohere and many other innovators are accelerating onto the scene, driven by the exponential increase in computing power described by Moore’s Law and the growing availability of processing resources. 

When we think back to the birth of the internet, it had no infrastructure or foundational backbone – both developed as the internet evolved. When generative AI arrived, it already had a huge infrastructure to draw upon, as well as the staggeringly large corpus of human knowledge published online during the past few decades. Behind the scenes, engineers have been training generative AI on this material, which is truly vast in scope. Technologies such as 5G, ubiquitous social media and an online knowledge ecosystem give AI models an enormous amount of material to learn from, and have massively accelerated the development of LLMs and generative AI. 

What Is Salesforce AI Cloud and Einstein? 

Roughly a decade ago, many organisations were just beginning their AI journey. Salesforce started out in 2014 when we launched the Office of Ethical and Humane Use of Technology. Then, in 2016 we introduced Einstein, which was the first comprehensive AI for customer relationship management (CRM). At that point in time, AI was analytical, driving predictions based on data.  We’re now in the world of generative AI, which creates content based on a text prompt.

The mobile banking revolution which took place roughly a decade ago shows us how quickly technology can be adopted at scale. As mobile phones achieved widespread adoption, the financial services sector started to deliver basic banking functionality through mobile apps, giving users the ability to perform tasks such as checking their balance or reading statements. After these apps started to roll out, we saw an exponential rate of engagement. 

Now we’re seeing the exact same trend with generative AI, which is available in accessible domains, whether that’s on a mobile app or through a web page. There has been an exponential rise in adoption as consumers use generative AI to do everything from completing their homework to challenging a parking ticket.

Learning To Trust Artificial Intelligence 

AI models can be “trained” on any large data set and then use what they learn to produce a predictive output. LLMs are trained on languages, using their analysis and predictive power to generate a linguistic output, whether that’s a simple email or a complex piece of code. When an AI is fed with pictures and visual information, it can generate complex images. If the material it learns from is computer code, it can build software. 

Trust is a fundamental issue with generative AI, with hallucinations, toxicity, privacy, bias, and data governance concerns creating a trust gap. At Salesforce, we’re tackling this challenge and helping our customers to realise the rewards of AI whilst mitigating the risks. We’ve just launched our AI Cloud, which incorporates an Einstein GPT trust layer, letting our customers use prompts grounded in CRM data to generate content that continuously adapts to changing customer information and needs in real time. 

When using our trust layer to instruct an LLM or other generative AI model, all data goes through a process of removing or masking personally identifiable information (PII). We mask the PII data and pass it through a secure gateway to the LLM – which could be any LLM the customer chooses to use, as long as it offers zero retention, because we don’t want the AI to store our customer data after generating a response. Every response is processed and validated to check for toxicity, bias, or hallucinations. If any of these problems are detected, the AI will flag the data and a human will check it manually to decide whether or not to pass the data through to the customer. 
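As a rough illustration of the flow just described – masking PII before a prompt leaves the secure gateway, then validating the response before it reaches the customer – the following Python sketch uses hypothetical names and deliberately crude checks. It is not Salesforce's implementation; the regex patterns, flagged-term list and `run_prompt` function are all assumptions for illustration:

```python
import re

# Hypothetical sketch of a mask-then-validate pipeline.
# Patterns and checks are illustrative, not Salesforce's trust layer.

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def mask_pii(text):
    """Replace detectable PII with placeholder tokens before the
    prompt leaves the secure gateway."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

def validate(response, flagged_terms=("idiot",)):
    """Crude stand-in for toxicity/bias/hallucination checks: return
    any flagged terms found so the response can go to human review."""
    lowered = response.lower()
    return [term for term in flagged_terms if term in lowered]

def run_prompt(prompt, llm):
    masked = mask_pii(prompt)   # raw PII never reaches the model
    response = llm(masked)      # `llm` is any zero-retention model call
    issues = validate(response)
    if issues:
        return {"status": "needs_human_review", "issues": issues}
    return {"status": "ok", "response": response}
```

A real deployment would replace the regex masking with a proper PII-detection service and the keyword check with trained toxicity and grounding classifiers; the point is only the order of operations: mask, call, validate, then route flagged output to a human.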

One of the key functionalities of the Einstein GPT trust layer is an audit trail which tracks all the data processed by an LLM, as well as the prompts and responses, so we can be sure that the output can be trusted. Our approach is unique and reduces the risk of potentially dangerous errors.
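An audit trail of this kind could be sketched as an append-only log of prompt/response pairs. The `AuditTrail` class below is purely illustrative – a minimal sketch of the idea, not the Einstein GPT implementation:

```python
import json
import time

# Illustrative append-only audit log: every prompt/response pair is
# recorded with a timestamp and any validation flags, so each output
# can be traced back later. Not Salesforce's implementation.
class AuditTrail:
    def __init__(self):
        self.entries = []

    def record(self, prompt, response, flags=None):
        entry = {
            "ts": time.time(),
            "prompt": prompt,
            "response": response,
            "flags": flags or [],
        }
        self.entries.append(entry)
        return json.dumps(entry)  # serialised line for durable storage
```

In production such a log would be written to tamper-evident storage rather than an in-memory list, but the shape of each record – prompt, response, flags, timestamp – is what makes later review possible.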

What Are The Benefits of Trusted AI?

When businesses trust AI, they can drive genuine productivity gains. Take a financial institution that deploys AI across its contact centres. If its average call handling time is 10 minutes and generative AI can shave several minutes off a call, the savings compound rapidly across millions of calls. Calls become not only shorter but more efficient. Staff can be freed up, and businesses can personalise their services and build deeper connections with their customer base. 
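To make the arithmetic concrete – with assumed figures for call volume and minutes saved, since the text only says "several minutes" – a quick back-of-the-envelope calculation:

```python
# Illustrative arithmetic only; the call volume and minutes saved
# are assumed figures, not taken from the article.
avg_call_min = 10
minutes_saved_per_call = 3        # "several minutes" shaved off
calls_per_year = 1_000_000

hours_saved = minutes_saved_per_call * calls_per_year / 60
reduction_pct = 100 * minutes_saved_per_call / avg_call_min

print(f"{hours_saved:,.0f} agent-hours freed per year "
      f"({reduction_pct:.0f}% shorter calls)")
```

Under these assumptions, a three-minute saving on a million calls frees roughly 50,000 agent-hours a year, which is where the headline productivity claim comes from.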

Although some staff members are going to be anxious and want to know what the advent of this technology means for their job, the reality is that generative AI can actually improve their work. It can free them from manual tasks, enabling them to take on more complex and challenging activities and engage with customers in a whole new way. AI gives them an unprecedented level of freedom and time. It is a genuine opportunity to improve NPS (net promoter score) and CSAT (customer satisfaction) scores within the financial services domain. 

We’re seeing wealth management firms use this technology to refine product positioning and enhance customer engagement with portfolios. In commercial banking, AI can generate invoices, tax returns and augment the jobs of skilled workers. In the future, AI models could be trained on regulations such as the Consumer Duty to drive automated compliance programs and audit an organisation well in advance of new rules coming into force. We’re on a maturity curve at the moment and will move beyond basic LLMs to generative AI models that have a deep contextual awareness of specific industries.  

Salesforce is already helping businesses implement AI in a safe, secure, trusted environment that delivers ROI and unlocks the rewards. Organisations should develop a clear strategy and then work through it methodically. They should also devise five experiments to test and learn from this technology, then build on the results. If organisations make the right decisions today, they can bridge the trust gap and unlock the benefits of generative AI without creating risk. Salesforce is here to help. So if you’ve got trust issues around technology, be sure to get in touch to find out how we can help you. 

Find out more about Salesforce EinsteinGPT, the World’s First Generative AI for CRM. 
